
    Smart nanotextiles: materials and their application

    Textiles are ubiquitous, enveloping our skin and surroundings. Not only do they provide a protective shield or act as a comforting cocoon, but they also hold esthetic appeal and cultural importance. Recent technologies have allowed the traditional functionality of textiles to be extended. Advances in materials science have added intelligence to textiles and created ‘smart’ clothes. Smart textiles can sense and react to environmental conditions or stimuli, e.g., from mechanical, thermal, chemical, electrical, or magnetic sources (Lam Po Tang and Stylios 2006). Such textiles find uses in many applications, ranging from military and security to personalized healthcare, hygiene, and entertainment. Smart textiles may be termed ‘‘passive’’ or ‘‘active.’’ A passive smart textile monitors the wearer’s physiology or the environment, e.g., a shirt with built-in thermistors to log body temperature over time. If actuators are integrated, the textile becomes an active smart textile, as it may respond to a particular stimulus, e.g., the temperature-aware shirt may automatically roll up its sleeves when body temperature rises. The fundamental components of any smart textile are sensors and actuators; interconnections, a power supply, and a control unit are also needed to complete the system. All these components must be integrated into textiles while still retaining the tactile, flexible, and comfortable properties that we expect of a textile. Adding new functionalities to textiles while maintaining the look and feel of the fabric is where nanotechnology has a huge impact on the textile industry. This article describes current developments in materials for smart nanotextiles and some of the many applications where these innovative textiles are of great benefit.

    Finding help for OCD in Australia : development and evaluation of a clinician directory

    Objective: People tend to live with obsessive-compulsive disorder (OCD) for many years before receiving evidence-based treatment. This delay is partly due to a lack of access to information about which healthcare providers offer evidence-based treatment for OCD; this information was not easily accessible online for people in Australia. Methods: We describe how an online directory of clinicians was developed and evaluated. We report on a needs analysis and a survey of treatment-seeking histories among consumers and carers impacted by OCD, describe the key features of the directory, and present survey feedback on its usability and utility. Results: The results confirmed the need for a directory specific to clinicians who offer evidence-based treatment for OCD and showed that the directory meets essential usability standards. Areas for improvement and further development were identified. Conclusion: This directory contributes to broader efforts to improve the treatment-seeking process for people living with OCD in Australia.

    Fabrication of Copper Window Electrodes with ≈10<sup>8</sup> Apertures cm<sup>−2</sup> for Organic Photovoltaics

    A powerful approach to increasing the far-field transparency of copper film window electrodes, which simultaneously reduces intraband absorption losses for wavelengths > 550 nm, is reported. The approach is based on incorporating a random array of ≈100 million circular apertures per cm² into an optically thin copper film, with a mean aperture diameter of ≈500 nm. A method for the fabrication of these electrodes is described that exploits a binary polymer blend mask that self-organizes at room temperature from a single solution, and so is simple to implement. Additionally, all of the materials used in electrode fabrication are low cost, low toxicity, and widely available. It is shown that these nanostructured copper electrodes offer an average far-field transparency of ≥80% and a sheet resistance of ≤10 Ω sq−1 when used in conjunction with a conventional solution-processed ZnO electron transport layer, and their utility in inverted organic photovoltaic devices is demonstrated.
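    As a rough sanity check (not a calculation from the paper itself), the aperture density and mean diameter quoted above imply a particular open-area fraction of the film, which can be estimated in a few lines:

```python
import math

# Back-of-envelope estimate using only the figures quoted in the abstract.
apertures_per_cm2 = 1e8      # ~10^8 circular apertures per cm^2
diameter_cm = 500e-7         # ~500 nm mean aperture diameter, in cm

aperture_area_cm2 = math.pi * (diameter_cm / 2) ** 2
open_fraction = apertures_per_cm2 * aperture_area_cm2

print(f"open-area fraction ~ {open_fraction:.1%}")  # ~ 19.6%
```

    Only about a fifth of the film area is removed, so the ≥80% average far-field transparency cannot come from the apertures alone; the optically thin copper between them must itself transmit a substantial fraction of the incident light.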

    Factors Affecting Intention to Receive and Self-Reported Receipt of 2009 Pandemic (H1N1) Vaccine in Hong Kong: A Longitudinal Study

    Background: Vaccination was a core component of mitigation for the 2009 influenza pandemic (pH1N1). However, a vaccination program's efficacy largely depends on population compliance. We examined general-population decision-making for pH1N1 vaccination using a modified Theory of Planned Behaviour (TPB). Methodology: We conducted a longitudinal study, collecting data before and after the introduction of the pH1N1 vaccine in Hong Kong. Structural equation modeling (SEM) tested whether a modified TPB had explanatory utility for vaccine uptake among adults. Principal Findings: Among 896 subjects who completed both the baseline and the follow-up surveys, 7% (67/896) reported being "likely/very likely/certain" to be vaccinated (intent), but two months later only 0.8% (7/896) reported having received pH1N1 vaccination. Perception of low risk from pH1N1 (60%) and concerns regarding adverse effects of the vaccine (37%) were the primary justifications for avoiding pH1N1 vaccination. Greater perceived vaccine benefits (β = 0.15), fewer concerns regarding vaccine side-effects (β = -0.20), greater adherence to social norms of vaccination (β = 0.39), higher anticipated regret if not vaccinated (β = 0.47), higher perceived self-efficacy for vaccination (β = 0.12) and a history of seasonal influenza vaccination (β = 0.12) were associated with higher intention to receive the pH1N1 vaccine, which in turn predicted self-reported vaccination uptake (β = 0.30). Social norms (β = 0.70), anticipated regret (β = 0.19) and vaccination intention (β = 0.31) were positively associated with, and accounted for 70% of the variance in, vaccination planning, which in turn predicted self-reported vaccination uptake (β = 0.36), accounting for 36% of the variance in reported vaccination behaviour. Conclusions/Significance: Perceived low risk from pH1N1 and perceived high risk from the pH1N1 vaccine inhibited pH1N1 vaccine uptake.
Both the TPB and the additional components contributed to intended vaccination uptake, but social norms and anticipated regret were predominantly associated with vaccination intention and planning. Vaccination planning is a more significant proximal determinant of pH1N1 vaccine uptake than intention; intention alone is an unreliable predictor of future vaccine uptake. © 2011 Liao et al.

    Operation and performance of the ATLAS Tile Calorimeter in Run 1

    The Tile Calorimeter is the hadron calorimeter covering the central region of the ATLAS experiment at the Large Hadron Collider. Approximately 10,000 photomultipliers collect light from scintillating tiles that act as the active material sandwiched between slabs of steel absorber. This paper gives an overview of the calorimeter’s performance during the years 2008–2012 using cosmic-ray muon events and proton–proton collision data at centre-of-mass energies of 7 and 8 TeV with a total integrated luminosity of nearly 30 fb−1. The signal reconstruction methods, calibration systems and detector operation status are presented. The energy and time calibration methods performed excellently, resulting in good stability of the calorimeter response under the varying conditions of LHC Run 1. Finally, the Tile Calorimeter response to isolated muons and hadrons, as well as to jets from proton–proton collisions, is presented. The results demonstrate excellent performance, consistent with the specifications given in the Technical Design Report.

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. 
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as specific markers for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.

    Measurements of differential cross-sections in top-quark pair events with a high transverse momentum top quark and limits on beyond the Standard Model contributions to top-quark pair production with the ATLAS detector at √s = 13 TeV

    Cross-section measurements of top-quark pair production where the hadronically decaying top quark has transverse momentum greater than 355 GeV and the other top quark decays into ℓνb are presented, using 139 fb−1 of data collected by the ATLAS experiment during proton–proton collisions at the LHC. The fiducial cross-section at √s = 13 TeV is measured to be σ = 1.267 ± 0.005 ± 0.053 pb, where the uncertainties reflect the limited number of data events and the systematic uncertainties, giving a total uncertainty of 4.2%. The cross-section is measured differentially as a function of variables characterising the tt̄ system and additional radiation in the events. The results are compared with various Monte Carlo generators, including comparisons in which the generators are reweighted to match a parton-level calculation at next-to-next-to-leading order; the reweighting improves the agreement between data and theory. The measured distribution of the top-quark transverse momentum is used to search for new physics in the context of the effective field theory framework. No significant deviation from the Standard Model is observed, and limits are set on the Wilson coefficients of the dimension-six operators OtG and Otq(8), where the limits on the latter are the most stringent to date.
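    The quoted 4.2% total uncertainty follows from combining the statistical and systematic components in quadrature, which can be checked directly (a trivial illustration, not code from the analysis):

```python
import math

sigma = 1.267   # fiducial cross-section, pb
stat = 0.005    # statistical uncertainty, pb
syst = 0.053    # systematic uncertainty, pb

total = math.hypot(stat, syst)   # quadrature sum: sqrt(stat^2 + syst^2)
rel = total / sigma              # relative total uncertainty

print(f"total = {total:.3f} pb, relative = {rel:.1%}")  # relative ~ 4.2%
```

    The systematic component dominates: the statistical term contributes less than 1% of the combined uncertainty in quadrature.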

    Determination of the parton distribution functions of the proton using diverse ATLAS data from pp collisions at √s = 7, 8 and 13 TeV

    This paper presents an analysis at next-to-next-to-leading order in the theory of quantum chromodynamics for the determination of a new set of proton parton distribution functions, using diverse measurements in pp collisions at √s = 7, 8 and 13 TeV performed by the ATLAS experiment at the Large Hadron Collider, together with deep inelastic scattering data from ep collisions at the HERA collider. The ATLAS data sets considered are differential cross-section measurements of inclusive W± and Z/γ* boson production, W± and Z boson production in association with jets, tt̄ production, inclusive jet production and direct photon production. In the analysis, particular attention is paid to the correlation of systematic uncertainties within and between the various ATLAS data sets and to the impact of model, theoretical and parameterisation uncertainties. The resulting set of parton distribution functions is called ATLASpdf21.

    Modelling and computational improvements to the simulation of single vector-boson plus jet processes for the ATLAS experiment

    This paper presents updated Monte Carlo configurations used to model the production of single electroweak vector bosons (W, Z/γ*) in association with jets in proton–proton collisions for the ATLAS experiment at the Large Hadron Collider. Improvements pertaining to the electroweak input scheme, parton-shower splitting kernels and scale-setting scheme are shown for multi-jet merged configurations accurate to next-to-leading order in the strong and electroweak couplings. The computational resources required for these set-ups are assessed, and approximations are introduced that result in a factor-of-three reduction in the per-event CPU time without affecting the physics modelling performance. Continuous statistical enhancement techniques are introduced by ATLAS to populate low-cross-section regions of phase space and are shown to match or exceed the generated effective luminosity. This, together with the lower per-event CPU time, results in a 50% reduction in the required computing resources compared with a legacy set-up previously used by the ATLAS collaboration. The set-ups described in this paper will be used for future ATLAS analyses and lay the foundation for the next generation of Monte Carlo predictions for single vector-boson plus jets production.